Temporal exponential random graph models (TERGMs) are powerful statistical models that can be used to infer the temporal pattern of edge formation and elimination in complex networks (e.g., social networks). TERGMs can also be used in a generative capacity to predict longitudinal time series data in these evolving graphs. However, parameter estimation within this framework fails to capture many real-world properties of social networks, including triadic relationships, small-world characteristics, and social learning theories, which could be used to constrain the probabilistic estimation of dyadic covariates. Here, we propose triadic temporal exponential random graph models (TTERGMs) to fill this void by incorporating these hierarchical network relationships within the graph model. We represent social network learning theory as an additional probability distribution that optimizes Markov chains in the graph vector space. The new parameters are then approximated via Monte Carlo maximum likelihood estimation. We show that our TTERGM model achieves improved fidelity and more accurate predictions compared to several benchmark methods on GitHub network data.
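To make the model family concrete, here is a minimal sketch of ERGM-style sufficient statistics with a triangle (triadic) term, the kind of statistic TTERGM builds into the temporal model. The two-statistic specification and function names are illustrative, not the paper's actual parameterization:

```python
from itertools import combinations

def ergm_stats(adj):
    """Sufficient statistics for a toy ERGM: edge count and triangle count.
    adj is a symmetric 0/1 adjacency matrix given as a list of lists.
    (Illustrative minimal statistics; the paper's TTERGM adds triadic and
    temporal terms to a richer TERGM specification.)"""
    n = len(adj)
    edges = sum(adj[i][j] for i, j in combinations(range(n), 2))
    triangles = sum(adj[i][j] and adj[j][k] and adj[i][k]
                    for i, j, k in combinations(range(n), 3))
    return edges, triangles

def log_weight(adj, theta_edge, theta_tri):
    """Unnormalized log-probability log p(G) proportional to theta . s(G).
    The intractable normalizing constant is why Monte Carlo maximum
    likelihood estimation is needed in practice."""
    e, t = ergm_stats(adj)
    return theta_edge * e + theta_tri * t
```

A positive `theta_tri` raises the probability of graphs with many closed triads, which is how triadic closure enters the model as a constraint rather than as a post-hoc check.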
By transferring knowledge from large, diverse, task-agnostic datasets, modern machine learning models can solve specific downstream tasks either zero-shot or with small task-specific datasets to a high level of performance. While this capability has been demonstrated in other fields such as computer vision, natural language processing, or speech recognition, it remains to be shown in robotics, where the generalization capabilities of the models are particularly critical due to the difficulty of collecting real-world robotic data. We argue that one of the keys to the success of such general robotic models lies in open-ended, task-agnostic training, combined with high-capacity architectures that can absorb all of the diverse robotic data. In this paper, we present a model class, dubbed Robotics Transformer, that exhibits promising scalable model properties. We verify our conclusions in a study of different model classes and their ability to generalize as a function of data size, model size, and data diversity, based on a large-scale data collection on real robots performing real-world tasks. The project's website and videos can be found at robotics-transformer.github.io
This study focuses on embodied agents that can follow natural language instructions to complete complex tasks in a visually-perceived environment. Existing methods rely on a large number of (instruction, gold trajectory) pairs to learn a good policy. The high data cost and poor sample efficiency prevent the development of versatile agents that are capable of many tasks and can learn new tasks quickly. In this work, we propose a novel method, LLM-Planner, that harnesses the power of large language models (LLMs) such as GPT-3 to do few-shot planning for embodied agents. We further propose a simple but effective way to enhance LLMs with physical grounding to generate plans that are grounded in the current environment. Experiments on the ALFRED dataset show that our method can achieve very competitive few-shot performance, even outperforming several recent baselines trained on the full training data, despite using less than 0.5% of the paired training data. Existing methods can barely complete any task successfully under the same few-shot setting. Our work opens the door for developing versatile and sample-efficient embodied agents that can quickly learn many tasks.
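A few-shot planning prompt of the kind the abstract describes can be sketched as follows. Injecting the list of objects the agent currently sees is one simple form of physical grounding; the exact prompt format, field names, and helper are assumptions for illustration, not LLM-Planner's actual template:

```python
def build_prompt(examples, instruction, visible_objects):
    """Assemble a few-shot planning prompt for an LLM.
    examples: list of {"instruction": ..., "plan": ...} demonstrations.
    The 'Visible objects' line grounds the plan in the current scene."""
    parts = []
    for ex in examples:
        parts.append(f"Task: {ex['instruction']}\nPlan: {ex['plan']}")
    # The new task ends with an open "Plan:" for the LLM to complete.
    parts.append(f"Task: {instruction}\n"
                 f"Visible objects: {', '.join(visible_objects)}\n"
                 "Plan:")
    return "\n\n".join(parts)
```

The returned string would then be sent to the LLM, whose completion is parsed into a sequence of high-level subgoals for the agent's low-level controller.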
Shape can specify key object constraints, yet existing text-to-image diffusion models ignore this cue and synthesize objects that are incorrectly scaled, cut off, or replaced with background content. We propose a training-free method, Shape-Guided Diffusion, which uses a novel Inside-Outside Attention mechanism to constrain the cross-attention (and self-attention) maps such that prompt tokens (and pixels) referring to the inside of the shape cannot attend outside the shape, and vice versa. To demonstrate the efficacy of our method, we propose a new image editing task where the model must replace an object specified by its mask and a text prompt. We curate a new ShapePrompts benchmark based on MS-COCO and achieve SOTA results in shape faithfulness, text alignment, and realism according to both quantitative metrics and human preferences. Our data and code will be made available at https://shape-guided-diffusion.github.io.
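The inside/outside constraint on the attention maps can be sketched in a few lines. This is a simplified, illustrative version of the masking idea (dense matrices, plain softmax), not the paper's implementation:

```python
import math

def inside_outside_attention(scores, token_inside, pixel_inside):
    """Mask an attention score matrix so that tokens referring to the inside
    of the shape can only attend to pixels inside the shape mask, and
    outside tokens only to outside pixels (simplified sketch).
    scores[t][p]: raw attention score of prompt token t over pixel p.
    token_inside[t], pixel_inside[p]: booleans from the shape mask."""
    masked = []
    for t, row in enumerate(scores):
        new_row = []
        for p, s in enumerate(row):
            # Allowed only when token and pixel fall on the same side.
            allowed = token_inside[t] == pixel_inside[p]
            new_row.append(s if allowed else float("-inf"))
        masked.append(new_row)
    # Softmax over pixels for each token; -inf entries get zero weight.
    out = []
    for row in masked:
        m = max(row)
        exps = [math.exp(s - m) for s in row]
        z = sum(exps)
        out.append([e / z for e in exps])
    return out
```

Setting disallowed scores to negative infinity before the softmax guarantees exactly zero attention mass across the shape boundary, which is what prevents the edit from spilling into the background.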
Complex wounds typically involve partial or total loss of skin thickness, healing by secondary intention. They can be acute or chronic, and may present infection, ischemia, and tissue necrosis, as well as associations with systemic diseases. Research institutions worldwide report countless cases, which ultimately constitute a serious public health problem, since they demand human resources (e.g., physicians and healthcare professionals) and negatively affect quality of life. This paper presents a new database for automatically classifying complex wounds into five categories: non-wound area, granulation, fibrinoid tissue, dry necrosis, and hematoma. The images cover different scenarios of complex wounds caused by pressure, vascular ulcers, diabetes, burns, and post-surgical complications. The dataset, called ComplexWoundDB, is unique in providing pixel-level classification of 27 images obtained in the wild, i.e., images collected in patients' homes and labeled by four health professionals. Further experiments with different machine learning techniques demonstrate the challenges of addressing the problem of computer-aided complex wound tissue classification. The manuscript sheds light on future directions in the area, with a detailed comparison against other databases widely used in the literature.
We introduce Cohort Comfort Models, a new framework for predicting how new occupants would perceive their thermal environment. Cohort Comfort Models leverage historical data collected from a sample population that share some underlying preference similarity in order to predict the thermal preference responses of new occupants. Our framework is able to exploit available background information such as physical characteristics and one-time onboarding surveys (Satisfaction with Life Scale, Highly Sensitive Person Scale, Big Five personality traits) from the new occupant, as well as physiological and environmental sensor measurements paired with thermal preference responses. We implemented the framework on two publicly available datasets containing longitudinal data from 55 people, comprising more than 6,000 individual thermal comfort surveys. We observe that Cohort Comfort Models that use background information provide little change in thermal preference prediction performance, but do so without the use of historical data. On the other hand, for half and one-third of the occupant population of each dataset respectively, Cohort Comfort Models increased their thermal preference prediction by 8% and 5% on average, and by up to 36% and 46% for certain occupants, compared to a general-purpose model trained on the entire occupant population, while using less historical data from the target occupant. The framework is presented in a data- and site-agnostic manner, with its different components easily tailored to the data availability of occupants and buildings. Cohort Comfort Models can be an important step towards personalization without the need to develop a personalized model for each new occupant.
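A minimal sketch of the cohort idea: match a new occupant to the cohort whose background features (e.g., onboarding survey scores) are closest, then predict from that cohort's historical responses. The distance-based matching and majority-vote prediction here are illustrative assumptions, not the paper's actual grouping or modeling procedure:

```python
def nearest_cohort(new_features, cohorts):
    """Pick the cohort whose centroid of background features is closest
    to the new occupant's feature vector (squared Euclidean distance)."""
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(cohorts, key=lambda c: dist(new_features, c["centroid"]))

def predict_preference(new_features, cohorts):
    """Predict the new occupant's thermal preference as the majority
    historical response of the matched cohort - i.e., borrow data from
    similar occupants instead of requiring the new occupant's own history."""
    cohort = nearest_cohort(new_features, cohorts)
    responses = cohort["responses"]  # e.g. ["warmer", "no change", ...]
    return max(set(responses), key=responses.count)
```

In a real deployment the per-cohort predictor would be a trained classifier over sensor measurements rather than a majority vote, but the data flow (background features select the cohort, cohort data drive the prediction) is the same.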
Novel texture synthesis for existing 3D mesh models is an important step towards generating photorealistic assets for existing simulators. However, existing methods inherently work in 2D image space, which is a projection of the 3D space from the perspective of a given camera. These methods take the camera angle, 3D model information, and lighting information, and generate photorealistic 2D images. To generate a photorealistic image from another perspective or under different lighting, we need a computationally expensive forward pass every time the parameters change. It is also difficult to generate such images for a simulator that must satisfy the temporal constraint that consecutive images in a sequence should be similar, changing only with the viewpoint or lighting as needed. Moreover, this solution cannot be directly integrated with existing tools such as Blender and Unreal Engine, and a manual solution is expensive and time-consuming. We therefore present a new system, called a Graph Generative Adversarial Network (GGAN), that can generate textures which can be integrated directly into a given 3D mesh model with tools such as Blender and Unreal Engine, and can easily be simulated from any perspective and lighting condition.
We study the structural and statistical properties of $\mathcal{R}$-norm minimizing interpolants of datasets labeled by specific target functions. The $\mathcal{R}$-norm is the basis of an inductive bias for two-layer neural networks, recently introduced to capture the functional effect of controlling the magnitude of network weights, independently of the network width. We find that these interpolants are intrinsically multivariate functions, even when there are ridge functions that fit the data, and also that the $\mathcal{R}$-norm inductive bias is not sufficient for achieving statistically optimal generalization for certain learning problems. Altogether, these results shed new light on an inductive bias that is connected to practical neural network training.
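For orientation, one common finite-width formalization of the $\mathcal{R}$-norm for two-layer ReLU networks is the following (a hedged sketch; the paper works with a precise, width-independent variant of this variational definition):

```latex
\|f\|_{\mathcal{R}} \;=\; \inf\Big\{ \sum_{i} |a_i| \;:\;
  f(x) = \sum_{i} a_i\, \sigma\big(\langle w_i, x\rangle - b_i\big),\;
  \|w_i\|_2 = 1 \Big\},
\qquad \sigma(t) = \max(t, 0).
```

Normalizing the inner weights $w_i$ makes the infimum measure only the magnitude of the outer weights $a_i$, regardless of how many units are used, which is the width-independence property referred to in the abstract.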
Large language models have been shown to achieve remarkable performance across a variety of natural language tasks using few-shot learning, which drastically reduces the number of task-specific training examples needed to adapt the model to a particular application. To further our understanding of the impact of scale on few-shot learning, we trained a 540-billion parameter, densely activated Transformer language model, which we call Pathways Language Model (PaLM). We trained PaLM on 6144 TPU v4 chips using Pathways, a new ML system that enables highly efficient training across multiple TPU Pods. We demonstrate continued benefits of scaling by achieving state-of-the-art few-shot learning results on hundreds of language understanding and generation benchmarks. On a number of these tasks, PaLM 540B achieves breakthrough performance, outperforming the fine-tuned state of the art on a suite of multi-step reasoning tasks, and outperforming average human performance on the recently released BIG-bench benchmark. A significant number of BIG-bench tasks showed discontinuous improvements from model scale, meaning that performance steeply increased as we scaled to our largest model. PaLM also has strong capabilities in multilingual tasks and source code generation, which we demonstrate on a wide array of benchmarks. We additionally provide a comprehensive analysis of bias and toxicity, and study the extent of training data memorization with respect to model scale. Finally, we discuss the ethical considerations related to large language models and potential mitigation strategies.
Large language models can encode a wealth of semantic knowledge about the world. Such knowledge could be extremely useful to robots aiming to act upon high-level, temporally extended instructions expressed in natural language. However, a significant weakness of language models is that they lack real-world experience, which makes it difficult to leverage them for decision making within a given embodiment. For example, asking a language model to describe how to clean a spill might result in a reasonable narrative, but it may not be applicable to a particular agent, such as a robot, that needs to perform this task in a particular environment. We propose to provide real-world grounding by means of pretrained skills, which are used to constrain the model to propose natural language actions that are both feasible and contextually appropriate. The robot can act as the language model's "hands and eyes," while the language model supplies high-level semantic knowledge about the task. We show how low-level skills can be combined with large language models so that the language model provides high-level knowledge about the procedures for performing complex and temporally extended instructions, while value functions associated with these skills provide the grounding necessary to connect this knowledge to a particular physical environment. We evaluate our method on a number of real-world robotic tasks, where we show the need for real-world grounding, and that this approach is capable of completing long-horizon, abstract, natural language instructions on a mobile manipulator. The project's website and videos can be found at https://say-can.github.io/.
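The combination the abstract describes, scoring each candidate skill by the language model's estimate of its usefulness times the skill's value function in the current state, can be sketched in one line. The callables and skill strings below are stand-ins for illustration:

```python
def select_skill(skills, llm_score, value):
    """Pick the next skill by combining (i) the LLM's score for the skill's
    language description as a useful next step for the instruction, and
    (ii) the skill's value function (its feasibility, or affordance, in the
    current state). A skill only wins if it is both useful AND feasible."""
    return max(skills, key=lambda s: llm_score(s) * value(s))
```

For example, "pick up the sponge" may score highly with the LLM for a cleaning instruction, but if no sponge is in reach its value function is low, so a feasible navigation skill is selected instead.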